
Conversation

@yfw
Contributor

@yfw yfw commented Nov 24, 2025

What does this PR do?

Updates vLLM to 0.11.2, torch to 2.9, and transformers to 4.57.1. Also updates the Automodel submodule to use the main branch.

Issues

List issues that this PR closes (syntax):

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

Release Notes

  • Updates

    • Upgraded core dependencies: PyTorch (2.9.0), Transformers (4.57.1), and vLLM (0.11.2)
    • Added multi-node distributed training configuration support
  • Improvements

    • Enhanced vLLM integration robustness and compatibility
    • Optimized FP8 weight handling for improved efficiency

✏️ Tip: You can customize this high-level summary in your review settings.

@yfw yfw added the CI:L1 Run doctests, unit tests, and functional tests label Nov 24, 2025
@github-actions

❌ Submodule Fast-Forward Check Failed

Check based on commit: 82b6f95 (PR #1563 from yifu/vllm0112_bump)

❌ Submodules that need attention:

Automodel: ❌ Commits have DIVERGED from a common ancestor
TARGET (main branch): https://github.com/NVIDIA-NeMo/Automodel/commits/a2db048383cd54b3fafc928df4c30bf7bbf7c430/
CURRENT (PR #1563 from yifu/vllm0112_bump): https://github.com/NVIDIA-NeMo/Automodel/commits/f9fc82c055e1cc69a68ff0bc7614aabe507a43ea/

Please ensure all submodule commits are fast-forwards of the main branch before merging.

Signed-off-by: Yi-Fu Wu <[email protected]>
@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: 39a9b03 (PR #1563 from yifu/vllm0112_bump)

✅ Submodules that are properly updated:

Automodel: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: fa2ccf4 (PR #1563 from yifu/vllm0112_bump)

✅ Submodules that are properly updated:

Automodel: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: baf37d6 (PR #1563 from yifu/vllm0112_bump)

✅ Submodules that are properly updated:

Automodel: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@yfw yfw added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Nov 27, 2025
@yfw yfw force-pushed the yifu/vllm0112_bump branch from cb2168a to eab6019 Compare December 1, 2025 19:35
@yfw yfw marked this pull request as ready for review December 1, 2025 19:37
@yfw yfw requested review from a team as code owners December 1, 2025 19:37
@yfw yfw requested a review from guyueh1 December 1, 2025 19:38
@github-actions

github-actions bot commented Dec 1, 2025

✅ Submodule Fast-Forward Check Results

Check based on commit: b671719 (PR #1563 from yifu/vllm0112_bump)

✅ Submodules that are properly updated:

Automodel: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@coderabbitai
Contributor

coderabbitai bot commented Dec 1, 2025

📝 Walkthrough

Walkthrough

This PR updates core dependencies (PyTorch 2.9.0, transformers 4.57.1, vLLM 0.11.2), refactors vLLM worker initialization to use dynamic file lookup instead of hardcoded paths, introduces in-place FP8 weight post-processing, and migrates test configurations from single-node to two-node setups.

Changes

  • Dependency Updates (pyproject.toml, tools/build-custom-vllm.sh): Bumped PyTorch from 2.8.0 to 2.9.0, transformers from ≥4.55.4 to ≥4.57.1, and vLLM from 0.11.0 to 0.11.2; updated the CUDA wheel index from cu128 to cu129.
  • Submodule Updates (.gitmodules, 3rdparty/Automodel-workspace/Automodel): Changed the Automodel submodule branch to yifu/bump-torch-and-hf and updated the commit pointer.
  • Configuration & Test Updates (examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml, tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh, tests/test_suites/nightly.txt): Updated checkpoint/log paths from 1n8g to 2n8g, set cluster.num_nodes: 2, changed NUM_NODES from 1 to 2, and switched the nightly test reference from the 1n8g to the 2n8g configuration.
  • FP8 Weight Processing (nemo_rl/models/generation/fp8.py): Added maybe_post_process_fp8_weight_block() for in-place FP8 weight and scale re-quantization; adjusted process_weights_after_loading() to call the new function without extra parameters.
  • vLLM Worker Initialization (nemo_rl/models/generation/vllm/vllm_worker.py): Replaced hardcoded vLLM import paths with dynamic file lookup via importlib.find_spec(); introduced a _get_vllm_file() helper and two new patching functions (_patch_vllm_init_workers_ray() and _patch_vllm_vit_flash_attn_backend()) for robust, version-agnostic patching.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • vllm_worker.py: Carefully verify dynamic file lookup logic handles vLLM versions robustly and that patched behaviors (Ray executor env variables, attention backend override) remain correct.
  • fp8.py: Ensure in-place weight/scale updates preserve gradient flow and model training behavior; confirm compatibility with DeepGemm E8M0 quantization.
  • Dependency compatibility: Cross-check PyTorch 2.9.0, transformers 4.57.1, and vLLM 0.11.2 compatibility across different compute environments and CUDA versions (cu129).
  • Multi-node test configurations: Validate that 2-node setup produces expected distributed behavior and resource allocation.

Possibly related PRs

Suggested reviewers

  • terrykong
  • parthchadha
  • guyueh1

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 75.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Test Results For Major Changes ⚠️ Warning: The PR introduces major dependency bumps (torch 2.9.0, transformers 4.57.1, vllm 0.11.2) and code changes to FP8 quantization and the vLLM workers, but the PR description contains no test results, performance benchmarks, or compatibility verification; review comments also flag incompatibility issues with this dependency combination. Resolution: document comprehensive test results in the PR description, including integration tests with the updated dependencies, training convergence verification, 2-node test configuration results, FP8 quantization validation, and confirmation that the torch 2.9.0 + transformers 4.57.1 combination has been tested.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the title accurately summarizes the main changes: dependency version bumps for vllm, torch, and transformers, which are the primary modifications across multiple files.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch yifu/vllm0112_bump

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 25ff3f6 and b671719.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (9)
  • .gitmodules (1 hunks)
  • 3rdparty/Automodel-workspace/Automodel (1 hunks)
  • examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml (2 hunks)
  • nemo_rl/models/generation/fp8.py (2 hunks)
  • nemo_rl/models/generation/vllm/vllm_worker.py (2 hunks)
  • pyproject.toml (5 hunks)
  • tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh (1 hunks)
  • tests/test_suites/nightly.txt (2 hunks)
  • tools/build-custom-vllm.sh (1 hunks)
🧰 Additional context used
📓 Path-based instructions (9)
**/*.sh

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.sh: Use uv run instead of python to execute scripts
Follow the Google Shell Style Guide for shell scripts

Files:

  • tools/build-custom-vllm.sh
  • tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh
!(**/tests/**|**/test_*.py|**/test_*.sh)

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year

Files:

  • tools/build-custom-vllm.sh
  • pyproject.toml
  • .gitmodules
  • examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml
  • tests/test_suites/nightly.txt
  • tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh
  • nemo_rl/models/generation/vllm/vllm_worker.py
  • nemo_rl/models/generation/fp8.py
  • 3rdparty/Automodel-workspace/Automodel
**/*.{py,sh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)

Files:

  • tools/build-custom-vllm.sh
  • tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh
  • nemo_rl/models/generation/vllm/vllm_worker.py
  • nemo_rl/models/generation/fp8.py
examples/configs/recipes/**/*.yaml

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

When adding support for a new model, create a recipe YAML under examples/configs/recipes/ in the appropriate domain subdirectory (llm, vlm, etc.)

Files:

  • examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml
examples/configs/recipes/llm/*.yaml

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Recipe YAML files should follow the naming pattern: <algo>-<model>-<nodes>n<gpus>g-<strategy-and-params>[-modifiers][-long][.vN].yaml for LLM recipes

Files:

  • examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml
tests/test_suites/nightly.txt

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

When adding a nightly test for a new model, append the driver script path (relative to tests/test_suites/) to tests/test_suites/nightly.txt

Files:

  • tests/test_suites/nightly.txt
tests/test_suites/**/*.sh

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

tests/test_suites/**/*.sh: When adding support for a new model, create a corresponding driver shell script under tests/test_suites/ in the matching domain
Driver shell scripts should match the YAML base name with .sh extension and invoke training entrypoint with uv run

Files:

  • tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code

Files:

  • nemo_rl/models/generation/vllm/vllm_worker.py
  • nemo_rl/models/generation/fp8.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes

Files:

  • nemo_rl/models/generation/vllm/vllm_worker.py
  • nemo_rl/models/generation/fp8.py
🧠 Learnings (6)
📚 Learning: 2025-11-24T17:24:41.976Z
Learnt from: CR
Repo: NVIDIA-NeMo/RL PR: 0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-11-24T17:24:41.976Z
Learning: Applies to examples/configs/recipes/llm/*.yaml : Recipe YAML files should follow the naming pattern: <algo>-<model>-<nodes>n<gpus>g-<strategy-and-params>[-modifiers][-long][.vN].yaml for LLM recipes

Applied to files:

  • examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml
📚 Learning: 2025-11-24T17:24:41.976Z
Learnt from: CR
Repo: NVIDIA-NeMo/RL PR: 0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-11-24T17:24:41.976Z
Learning: Applies to examples/configs/recipes/vlm/*.yaml : Recipe YAML files should follow the naming pattern: vlm_<algo>-<model>-<nodes>n<gpus>g-<strategy>[-modifiers][.vN].yaml for VLM recipes

Applied to files:

  • examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml
📚 Learning: 2025-09-18T13:26:43.307Z
Learnt from: zpqiu
Repo: NVIDIA-NeMo/RL PR: 1006
File: examples/configs/recipes/llm/distillation-qwen3-32b-to-8b-base-2n8g-fsdp2tp2.v1.yaml:19-26
Timestamp: 2025-09-18T13:26:43.307Z
Learning: In on-policy distillation workflows, validation can use downstream task performance (like math problem solving) as RL-like reward metrics rather than traditional distillation metrics like KL divergence. In this case, "val_reward" with "higher_is_better: true" is the correct checkpoint monitoring configuration.

Applied to files:

  • examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml
📚 Learning: 2025-11-24T17:24:41.976Z
Learnt from: CR
Repo: NVIDIA-NeMo/RL PR: 0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-11-24T17:24:41.976Z
Learning: Applies to tests/test_suites/nightly.txt : When adding a nightly test for a new model, append the driver script path (relative to tests/test_suites/) to tests/test_suites/nightly.txt

Applied to files:

  • tests/test_suites/nightly.txt
📚 Learning: 2025-10-12T14:46:57.171Z
Learnt from: zpqiu
Repo: NVIDIA-NeMo/RL PR: 1324
File: tests/test_suites/llm/distillation-qwen3-32b-to-1.7b-base-1n8g-megatron-tp2pp2cp2-pack.sh:6-11
Timestamp: 2025-10-12T14:46:57.171Z
Learning: Test scripts in tests/test_suites/llm/ follow a standard configuration pattern that includes NUM_NODES, STEPS_PER_RUN, MAX_STEPS, NUM_RUNS (calculated as `$(( (MAX_STEPS + STEPS_PER_RUN - 1) / STEPS_PER_RUN ))`), and NUM_MINUTES. These variables are part of the test infrastructure's standard interface and should not be flagged as unused even if not directly referenced within the individual script, as they are consumed by external launch tooling or common.env.

Applied to files:

  • tests/test_suites/nightly.txt
  • tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh
📚 Learning: 2025-11-06T22:30:22.860Z
Learnt from: ZhiyuLi-Nvidia
Repo: NVIDIA-NeMo/RL PR: 1477
File: nemo_rl/models/generation/vllm/vllm_backend.py:163-168
Timestamp: 2025-11-06T22:30:22.860Z
Learning: For Ray actor methods in the vLLM generation worker code (vllm_backend.py, vllm_worker.py, vllm_worker_async.py), error handling should use print/traceback + return False pattern rather than raising exceptions, following the Ray RPC practice where exceptions may not propagate well across process boundaries.

Applied to files:

  • nemo_rl/models/generation/vllm/vllm_worker.py
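
As a concrete illustration of that learning, here is a minimal sketch of the print/traceback-plus-return-value pattern for a Ray actor method. The class, method, and helper names are hypothetical and are not taken from this PR.

import traceback

import ray


@ray.remote  # pragma: no cover (runs in a separate Ray process)
class GenerationWorker:
    def update_weights(self, state_dict) -> bool:
        """Hypothetical actor method showing the error-handling convention."""
        try:
            self._apply_state_dict(state_dict)
        except Exception:
            # Report locally and signal failure via the return value instead of
            # raising, since exceptions may not propagate cleanly across the
            # Ray RPC boundary.
            traceback.print_exc()
            return False
        return True

    def _apply_state_dict(self, state_dict) -> None:
        # Placeholder for real weight-loading logic.
        if not isinstance(state_dict, dict):
            raise TypeError("state_dict must be a dict")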
🪛 Ruff (0.14.6)
nemo_rl/models/generation/vllm/vllm_worker.py

177-181: Avoid specifying long messages outside the exception class

(TRY003)


187-192: Avoid specifying long messages outside the exception class

(TRY003)


221-221: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)

🪛 Shellcheck (0.11.0)
tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh

[warning] 6-6: NUM_NODES appears unused. Verify use (or export if used externally).

(SC2034)

🔇 Additional comments (12)
examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml (1)

9-9: LGTM—consistent 2-node config update with proper naming.

The changes correctly scale the configuration from 1-node to 2-node setup:

  • Directory/checkpoint identifiers updated consistently (1n8g → 2n8g).
  • cluster.num_nodes: 2 properly specified with gpus_per_node: 8 (line 59).
  • Filename follows the required pattern per coding guidelines: <algo>-<model>-<nodes>n<gpus>g-<strategy-and-params>[-modifiers][-long][.vN].yaml.
  • Changes align with the broader PR context (test suite migrations and dependency updates).

Also applies to: 51-51, 56-56, 58-58

.gitmodules (1)

14-14: Clarify submodule branch intent—potential merge blocker.

Line 14 points to yifu/bump-torch-and-hf (a feature branch). Pinning submodules to feature branches creates build fragility: the branch can be deleted, have its history rewritten, or go stale, potentially breaking CI and releases.

Before merging, confirm:

  1. Is this feature branch intended to be temporary, or should it remain in production?
  2. If temporary, ensure the Automodel repo merges this branch to main (or another stable ref) before this PR merges.
  3. If this branch must remain referenced, update the PR description to document the dependency and expected lifetime.
3rdparty/Automodel-workspace/Automodel (1)

1-1: Verify that the new submodule commit is compatible with the dependency bumps.

The submodule pointer has been updated to a new commit. Given that this PR includes significant dependency bumps (torch 2.9, transformers 4.57.1, vLLM 0.11.2), please verify that the new Automodel commit (910f4e0402ec3af0c3b8642639f0347732067630) is compatible with these updated versions and does not introduce any breaking changes or version conflicts.

Consider verifying:

  • The new commit exists in the Automodel repository
  • What changes are included in the new commit and whether they align with the dependency upgrades
  • Whether any configuration or compatibility adjustments are needed in consuming code
tests/test_suites/nightly.txt (1)

19-19: LGTM!

The nightly test suite update correctly reflects the shift from 1n8g to 2n8g configuration, and the comment update properly describes the moonlight run section.

Also applies to: 42-42

tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh (1)

5-11: LGTM!

The NUM_NODES=2 update correctly aligns with the 2n8g configuration indicated in the filename. The ShellCheck warning about NUM_NODES being unused is a false positive—based on learnings, these variables are consumed by external launch tooling or common.env.

pyproject.toml (2)

60-60: LGTM!

The vllm version is consistently updated to 0.11.2 across all optional-dependencies sections (automodel, vllm, mcore).

Also applies to: 72-72, 95-95


106-108: LGTM!

The build dependency group correctly updates torch to 2.9.0, maintaining consistency with the main dependencies section.

tools/build-custom-vllm.sh (1)

69-69: LGTM!

The torch installation correctly updates to 2.9.0 with the cu129 wheel index, maintaining consistency with pyproject.toml. The xformers version (0.0.32.post1) on line 61 is appropriately updated to be compatible with torch 2.9.

nemo_rl/models/generation/fp8.py (2)

426-457: LGTM! In-place FP8 weight post-processing preserves weight_loader compatibility.

The implementation correctly:

  1. Checks if DeepGemm should be used before processing
  2. Uses .data.copy_() for in-place updates instead of creating new torch.nn.Parameter objects, preserving the weight_loader attribute needed for refit
  3. Properly references the vLLM source for traceability

The lazy imports are appropriate for optional vLLM dependencies.
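
To make the in-place update point concrete, here is a minimal, self-contained sketch (not the code from fp8.py) of why .data.copy_() preserves custom attributes such as weight_loader, whereas rebinding a fresh torch.nn.Parameter drops them. The layer and attribute below are stand-ins, not real FP8 re-quantization.

import torch

# Hypothetical stand-in for a quantized linear layer; only the attribute
# behavior is illustrated here.
layer = torch.nn.Linear(4, 4)
layer.weight.weight_loader = lambda *a, **k: None  # attribute needed later for refit

requantized = torch.randn_like(layer.weight)

# In-place update: the Parameter object (and its custom attributes) survives.
layer.weight.data.copy_(requantized)
assert hasattr(layer.weight, "weight_loader")

# Rebinding a new Parameter would discard the custom attribute.
layer.weight = torch.nn.Parameter(requantized)
assert not hasattr(layer.weight, "weight_loader")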


459-484: LGTM!

The integration of maybe_post_process_fp8_weight_block at the end of process_weights_after_loading is correct. The call order ensures layer.weight_scale is properly initialized before the DeepGemm-specific post-processing is applied.

nemo_rl/models/generation/vllm/vllm_worker.py (2)

19-19: LGTM!

The import of find_spec is appropriate for the new dynamic vLLM file discovery approach.


168-194: LGTM!

The helper function provides robust runtime discovery of vLLM files with clear error messages. The detailed error messages flagged by ruff (TRY003) are actually beneficial for debugging installation and version issues.
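
For readers skimming the review, a minimal sketch of what such a lookup helper could look like is shown below. It assumes vLLM is importable and is not the exact _get_vllm_file implementation from this PR; the error messages are illustrative.

import os
from importlib.util import find_spec


def _get_vllm_file(relative_path: str) -> str:
    """Locate a file inside the installed vllm package (illustrative sketch)."""
    spec = find_spec("vllm")
    if spec is None or spec.origin is None:
        raise RuntimeError("vllm is not installed; cannot locate files to patch")
    package_dir = os.path.dirname(spec.origin)  # .../site-packages/vllm
    candidate = os.path.join(package_dir, relative_path)
    if not os.path.isfile(candidate):
        raise RuntimeError(
            f"Expected vllm file not found: {candidate}. "
            "The installed vllm version may have moved or renamed this file."
        )
    return candidate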

Comment on lines +196 to +232
    def _patch_vllm_init_workers_ray():
        """Patch the vLLM ray_distributed_executor.py file.

        1. Pass custom runtime_env in _init_workers_ray call.
           - This allows passing custom py_executable to worker initialization.
        2. Add NCCL_CUMEM_ENABLE and NCCL_NVLS_ENABLE to vLLM ADDITIONAL_ENV_VARS.
           - This is a workaround to fix async vllm in some scenarios.
           - See https://github.com/NVIDIA-NeMo/RL/pull/898 for more details.
        """
        file_to_patch = _get_vllm_file("v1/executor/ray_executor.py")

        with open(file_to_patch, "r") as f:
            content = f.read()

        old_lines = [
            "self._init_workers_ray(placement_group)",
            'ADDITIONAL_ENV_VARS = {"HF_TOKEN", "HUGGING_FACE_HUB_TOKEN"}',
        ]
        new_lines = [
            f'self._init_workers_ray(placement_group, runtime_env={{"py_executable": "{self.py_executable}"}})',
            'ADDITIONAL_ENV_VARS = {"HF_TOKEN", "HUGGING_FACE_HUB_TOKEN", "NCCL_CUMEM_ENABLE", "NCCL_NVLS_ENABLE", "RAY_ENABLE_UV_RUN_RUNTIME_ENV"}',
        ]

        need_replace = False
        for old_line, new_line in zip(old_lines, new_lines):
            if new_line in content or old_line not in content:
                continue
            content = content.replace(old_line, new_line)
            need_replace = True

        if not need_replace:
            return

        # Write back the patched content
        with open(file_to_patch, "w") as f:
            f.write(content)


⚠️ Potential issue | 🔴 Critical

Critical: Code injection risk and concurrency hazard.

This function has several critical issues:

  1. Code injection vulnerability (Line 216): self.py_executable is directly embedded into an f-string that becomes Python source code. If the path contains quotes or special characters, it could break syntax or enable code injection.

  2. File modification race condition: Multiple workers or processes may attempt to patch the same installed vLLM files concurrently, risking corruption or incomplete patches.

  3. Persistent modifications: Changes to installed package files persist across runs and affect all projects using the same vLLM installation, potentially causing unexpected behavior.

  4. Missing strict parameter (Line 221): The zip() call should include strict=True for Python 3.10+.

Apply these fixes:

         new_lines = [
-            f'self._init_workers_ray(placement_group, runtime_env={{"py_executable": "{self.py_executable}"}})',
+            f'self._init_workers_ray(placement_group, runtime_env={{"py_executable": {self.py_executable!r}}})',
             'ADDITIONAL_ENV_VARS = {"HF_TOKEN", "HUGGING_FACE_HUB_TOKEN", "NCCL_CUMEM_ENABLE", "NCCL_NVLS_ENABLE", "RAY_ENABLE_UV_RUN_RUNTIME_ENV"}',
         ]
 
         need_replace = False
-        for old_line, new_line in zip(old_lines, new_lines):
+        for old_line, new_line in zip(old_lines, new_lines, strict=True):
             if new_line in content or old_line not in content:
                 continue

Additional recommendations:

  • Consider using vLLM's official extension or plugin mechanisms if available, rather than patching installed files.
  • Add file locking if the patching approach must be retained.
  • Document that this approach requires write permissions to the vLLM installation directory.
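
A small, self-contained demonstration of the quoting concern above, independent of vLLM: it shows why repr-style interpolation (!r) is safer than bare interpolation when a path is embedded into generated Python source. The path below is made up.

import ast

# A pathological (made-up) executable path containing a quote character.
py_executable = 'venv/bin/python"; import os  # "'

naive = f'init(runtime_env={{"py_executable": "{py_executable}"}})'
safe = f'init(runtime_env={{"py_executable": {py_executable!r}}})'

# The naive interpolation breaks the generated string literal open.
try:
    ast.parse(naive)
    print("naive form happened to parse")
except SyntaxError:
    print("naive form is not valid Python")

# The !r form always yields a correctly quoted and escaped string literal.
ast.parse(safe)  # parses cleanly
print(safe)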
🧰 Tools
🪛 Ruff (0.14.6)

221-221: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)

Comment on lines +234 to +281
    def _patch_vllm_vit_flash_attn_backend():
        """Patch vLLM vision attention backend selection logic.

        Modify the CUDA branch of maybe_get_vit_flash_attn_backend in
        vllm.attention.layer to avoid overriding the backend when it
        is already set to XFORMERS. This avoids flash attention related
        errors when the ViT head dimension is not a multiple of 32.

        Related issues:
        - https://github.com/vllm-project/vllm/issues/27562
        - https://github.com/vllm-project/vllm/issues/26989

        This is properly fixed in https://github.com/vllm-project/vllm/pull/28763. We can remove this patch once we upgrade to a version of vllm that contains this fix.
        """
        file_to_patch = _get_vllm_file("attention/layer.py")
        with open(file_to_patch, "r") as f:
            content = f.read()

        old_snippet = (
            "    elif current_platform.is_cuda():\n"
            "        if (\n"
            "            attn_backend != AttentionBackendEnum.FLASH_ATTN\n"
            "            and check_upstream_fa_availability(torch.get_default_dtype())\n"
            "        ):\n"
            "            attn_backend = AttentionBackendEnum.FLASH_ATTN\n"
            "            use_upstream_fa = True\n"
        )

        new_snippet = (
            "    elif current_platform.is_cuda():\n"
            "        if (\n"
            "            attn_backend != AttentionBackendEnum.FLASH_ATTN\n"
            "            and attn_backend != AttentionBackendEnum.XFORMERS\n"
            "            and check_upstream_fa_availability(torch.get_default_dtype())\n"
            "        ):\n"
            "            attn_backend = AttentionBackendEnum.FLASH_ATTN\n"
            "            use_upstream_fa = True\n"
        )

        # Only patch if the file still has the old snippet and
        # hasn't been patched already.
        if new_snippet in content or old_snippet not in content:
            return

        content = content.replace(old_snippet, new_snippet)

        # Write back the patched content
        with open(file_to_patch, "w") as f:
            f.write(content)


⚠️ Potential issue | 🟠 Major

Same file modification concerns apply.

This patch function has the same operational risks as _patch_vllm_init_workers_ray: concurrent modification hazards and persistent changes to installed packages.

Since the comment indicates this is fixed in vllm PR #28763, consider adding a version check to skip this patch when using vLLM versions that include the fix.

     def _patch_vllm_vit_flash_attn_backend():
         """Patch vLLM vision attention backend selection logic.
 
         Modify the CUDA branch of maybe_get_vit_flash_attn_backend in
         vllm.attention.layer to avoid overriding the backend when it
         is already set to XFORMERS. This avoids flash attention related
         errors when the ViT head dimension is not a multiple of 32.
 
         Related issues:
         - https://github.com/vllm-project/vllm/issues/27562
         - https://github.com/vllm-project/vllm/issues/26989
 
         This is properly fixed in https://github.com/vllm-project/vllm/pull/28763. We can remove this patch once we upgrade to a version of vllm that contains this fix.
         """
+        # TODO: Add version check to skip patching when using vLLM versions >= X.Y.Z that include PR #28763
         file_to_patch = _get_vllm_file("attention/layer.py")

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In nemo_rl/models/generation/vllm/vllm_worker.py around lines 234 to 281, add a
guard that checks the installed vLLM package version and skips applying this
patch when the vLLM version already includes PR #28763; implement by importing
importlib.metadata (or importlib_metadata for py<3.8) and
packaging.version.Version, retrieving the installed vllm version in a
try/except, comparing it to a MIN_VLLM_VERSION constant (set to the first
release that contains PR #28763), and if installed_version >= MIN_VLLM_VERSION
log an informational message and return without modifying files; ensure
PackageNotFoundError is handled (proceed with patch if vllm not installed) and
keep existing behavior otherwise.
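
Following that prompt, one possible shape for such a guard is sketched below. The cutoff version is a placeholder, since the first vLLM release containing PR #28763 is not stated here, and the function name is hypothetical.

from importlib.metadata import PackageNotFoundError, version

from packaging.version import Version

# Placeholder: set to the first vLLM release that includes vllm-project/vllm#28763.
MIN_VLLM_VERSION_WITH_FIX = Version("999.0.0")


def _needs_vit_flash_attn_patch() -> bool:
    """Return True if the installed vLLM still needs the ViT attention patch."""
    try:
        installed = Version(version("vllm"))
    except PackageNotFoundError:
        # vLLM is not installed in this environment; keep the previous behavior
        # of attempting the patch later.
        return True
    return installed < MIN_VLLM_VERSION_WITH_FIX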

Comment on lines +283 to +287
        _patch_vllm_init_workers_ray()
        logger.info("Successfully patched vllm _init_workers_ray.")

        _patch_vllm_vit_flash_attn_backend()
        logger.info("Successfully patched vllm vit flash attention backend.")


⚠️ Potential issue | 🟠 Major

Add error handling for patch invocations.

The patch functions can raise RuntimeError (from _get_vllm_file), but there's no error handling here. If patching fails, the entire worker initialization fails.

Based on learnings from previous reviews, Ray actor methods should handle errors gracefully rather than propagating exceptions.

Apply this diff:

-        _patch_vllm_init_workers_ray()
-        logger.info("Successfully patched vllm _init_workers_ray.")
-
-        _patch_vllm_vit_flash_attn_backend()
-        logger.info("Successfully patched vllm vit flash attention backend.")
+        try:
+            _patch_vllm_init_workers_ray()
+            logger.info("Successfully patched vllm _init_workers_ray.")
+        except Exception:
+            import traceback
+            logger.error("Failed to patch vllm _init_workers_ray:")
+            traceback.print_exc()
+            # Consider whether to continue or return based on criticality
+
+        try:
+            _patch_vllm_vit_flash_attn_backend()
+            logger.info("Successfully patched vllm vit flash attention backend.")
+        except Exception:
+            import traceback
+            logger.error("Failed to patch vllm vit flash attention backend:")
+            traceback.print_exc()
+            # This patch may not be needed for all vLLM versions, so continue

Based on learnings, Ray actor methods benefit from explicit error handling with print/traceback patterns.

🤖 Prompt for AI Agents
In nemo_rl/models/generation/vllm/vllm_worker.py around lines 283 to 287, the
two patch calls (_patch_vllm_init_workers_ray() and
_patch_vllm_vit_flash_attn_backend()) must be wrapped in explicit error handling
so exceptions from _get_vllm_file (RuntimeError or others) do not abort worker
init; add try/except around each patch (or a single try that calls both) that
catches Exception, calls traceback.print_exc(), and logs the failure with
logger.error including the exception information (or use logger.exception), then
continue initialization without re-raising so the Ray actor method does not
propagate the exception.

"setuptools",
"ninja", # for flash-attn parallel build
"torch==2.8.0",
"torch==2.9.0",


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

torch 2.9.0 vllm 0.11.2 compatibility

💡 Result:

Short answer: Yes — vLLM v0.11.2 is compatible with PyTorch 2.9.0, but you must use a matching torch CUDA build (e.g. torch==2.9.0+cu129) or build vLLM from source if your CUDA/PyTorch build differs. [1][2]

Sources:

  • vLLM GitHub release notes (v0.11.1 → v0.11.2 is a patch; v0.11.x notes state default build uses PyTorch 2.9.0 + CUDA 12.9.1). [1]
  • vLLM installation / GPU requirements (binary CUDA/PyTorch build compatibility and recommendation to build from source if versions differ). [2]

References:
[1] https://github.com/vllm-project/vllm/releases (v0.11.2 / v0.11.1 notes).
[2] https://docs.vllm.ai/en/stable/getting_started/installation/gpu/


🌐 Web query:

transformers 4.57.1 torch 2.9.0 compatibility

💡 Result:

Short answer: usually yes — transformers 4.57.1 can run with PyTorch 2.9.0, but it’s not guaranteed to be fully supported and several incompatibility bugs have been reported. Use a supported PyTorch release recommended by the Transformers docs (Transformers expects PyTorch 2.x; current docs note PyTorch 2.2+/2.0+ depending on doc version) for best results. [1][2]

Details / evidence:

  • Transformers installation docs state the library is tested on PyTorch 2.x (docs show 2.2+ on main and earlier pages mention 2.0+). [1]
  • Multiple GitHub issues show users running transformers==4.57.1 with torch 2.9.0 (and 2.9.x builds) but encountering problems (torchcodec loading, tensor-parallel loading errors, and other incompatibilities or warnings from downstream packages). These indicate practical issues may arise even when import succeeds. [2][3][4]

Recommendation:

  • If you need maximum stability, use a PyTorch version explicitly listed in the Transformers documentation / tested matrix (avoid bleeding-edge torch builds). If you must use torch 2.9.0, test your model/workflow thoroughly and be prepared to fall back to a supported torch minor version if you hit the issues reported in the linked issues. [1][2][3]

Sources:

  • Transformers installation / compatibility notes. [1]
  • Hugging Face transformers GitHub issues noting usage / problems with torch 2.9.0 and transformers 4.57.1. [2][3][4]

References:
[1] Transformers installation docs (main / v4.xx).
[2] GH issue: “Cannot load LLaMA3 with transformers==4.57.1 …” (mentions PyTorch 2.9).
[3] GH issue: “Compatability with torchcodec” (user with torch 2.9.0+cu128).
[4] GH issue: “Transformers 4.57.1 not working with torch < 2.3” (discussion of torch version constraints).


Verify torch 2.9.0 compatibility—known issues with transformers 4.57.1.

vLLM 0.11.2 is compatible with torch 2.9.0, but ensure the CUDA build matches (e.g., torch==2.9.0+cu129). Transformers 4.57.1 with torch 2.9.0 has reported incompatibility issues (torchcodec loading, tensor-parallel errors) and is not in the officially tested matrix. Test this combination thoroughly before merging, or consider pinning to a tested PyTorch version (2.2+) if issues arise during integration testing.

🤖 Prompt for AI Agents
In pyproject.toml around line 21, the pinned "torch==2.9.0" may be incompatible
with transformers 4.57.1 (and requires a matching CUDA build), so update the
dependency to a compatible specification or test accordingly: either pin to a
tested PyTorch release (e.g., "torch>=2.2,<3.0" or a specific CUDA build like
"torch==2.9.0+cu129" matching your environment) and add a comment noting
required CUDA tag, or run integration tests with transformers 4.57.1 and vLLM
0.11.2 to confirm compatibility before merging; if tests fail, change the torch
pin to a known-good version and document the decision in the file.

Signed-off-by: Yi-Fu Wu <[email protected]>
@yfw yfw requested a review from a team as a code owner December 1, 2025 22:53
@github-actions

github-actions bot commented Dec 1, 2025

✅ Submodule Fast-Forward Check Results

Check based on commit: c5c3eca (PR #1563 from yifu/vllm0112_bump)

✅ Submodules that are properly updated:

Automodel: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

Signed-off-by: Yi-Fu Wu <[email protected]>
@github-actions

github-actions bot commented Dec 1, 2025

✅ Submodule Fast-Forward Check Results

Check based on commit: ee4a84c (PR #1563 from yifu/vllm0112_bump)

✅ Submodules that are properly updated:

Automodel: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

Signed-off-by: Yi-Fu Wu <[email protected]>
@github-actions

github-actions bot commented Dec 2, 2025

✅ Submodule Fast-Forward Check Results

Check based on commit: ae0f57d (PR #1563 from yifu/vllm0112_bump)

✅ Submodules that are properly updated:

Automodel: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@yfw yfw added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Dec 2, 2025